Description of problem:
=======================
After updating a RHEL 7.2 node running RHGS to RHEL 7.3, getting the below AVC message after a node reboot/glusterd restart:

type=AVC msg=audit(1471946614.154:109): avc: denied { name_bind } for pid=2302 comm="glusterd" src=61000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket

This also happens with a layered installation.

Version-Release number of selected component (if applicable):
=============================================================
RHEL: 7.3 (3.10.0-493.el7.x86_64)
RHGS: glusterfs-3.7.9-10

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Have a RHEL 7.2 RHGS 3.1.3 (3.7.9-10) node
2. Create a simple Distribute volume and start it
3. Update RHEL from 7.2 to 7.3
4. Reboot the node for the kernel update
5. Check the audit logs for AVC messages (grep -ri "avc" /var/log/audit/audit.log)

After every glusterd restart, you will see the AVC denial message related to glusterd.

Actual results:
===============
Getting the below AVC denial message:

type=AVC msg=audit(1471946614.154:109): avc: denied { name_bind } for pid=2302 comm="glusterd" src=61000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket

Expected results:
=================
Should not get the AVC denial message after updating to RHEL 7.3

Additional info:
Some info:
==========
Always getting the 61000 src port AVC message after glusterd restart in the audit.log.

netstat details on the node:

~]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:752             0.0.0.0:*               LISTEN      31096/glusterfs
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1814/sshd
tcp        0      0 0.0.0.0:49178           0.0.0.0:*               LISTEN      29144/glusterfsd
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      31096/glusterfs
tcp        0      0 0.0.0.0:38465           0.0.0.0:*               LISTEN      31096/glusterfs
tcp        0      0 0.0.0.0:38466           0.0.0.0:*               LISTEN      31096/glusterfs
tcp        0      0 0.0.0.0:16514           0.0.0.0:*               LISTEN      28482/libvirtd
tcp        0      0 0.0.0.0:5666            0.0.0.0:*               LISTEN      1799/nrpe
tcp        0      0 0.0.0.0:38468           0.0.0.0:*               LISTEN      31096/glusterfs
tcp        0      0 0.0.0.0:38469           0.0.0.0:*               LISTEN      31096/glusterfs
tcp        0      0 0.0.0.0:46405           0.0.0.0:*               LISTEN      31113/rpc.statd
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      30956/glusterd
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp6       0      0 :::45237                :::*                    LISTEN      31113/rpc.statd
tcp6       0      0 :::22                   :::*                    LISTEN      1814/sshd
tcp6       0      0 :::16514                :::*                    LISTEN      28482/libvirtd
tcp6       0      0 :::5666                 :::*                    LISTEN      1799/nrpe
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
udp        0      0 0.0.0.0:46624           0.0.0.0:*                           1721/dhclient
udp        0      0 0.0.0.0:625             0.0.0.0:*                           1306/rpcbind
udp        0      0 0.0.0.0:749             0.0.0.0:*                           31096/glusterfs
udp        0      0 127.0.0.1:766           0.0.0.0:*                           31113/rpc.statd
udp        0      0 0.0.0.0:68              0.0.0.0:*                           1721/dhclient
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1306/rpcbind
udp        0      0 0.0.0.0:52369           0.0.0.0:*                           31113/rpc.statd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           1339/chronyd
udp6       0      0 :::30290                :::*                                1721/dhclient
udp6       0      0 :::625                  :::*                                1306/rpcbind
udp6       0      0 :::46047                :::*                                31113/rpc.statd
udp6       0      0 :::111                  :::*                                1306/rpcbind
udp6       0      0 ::1:323                 :::*                                1339/chronyd
[root@ ~]#
This issue exists on RHGS RHEL 7.2 itself; updating from RHEL 7.2 to 7.3 is not required to reproduce it. Just take an RHGS RHEL 7.2 node, create and start a volume, restart glusterd, and check for AVC messages in the audit.log.
The issue appears to be caused by the following entry in /proc/sys/net/ipv4/ip_local_port_range:

32768 60999

Here the upper cap of the kernel's local port range is 60999. However, gluster maintains its own local port range for its portmap logic, which goes up to 65535. When the portmap table is rebuilt through pmap_registry_new() on a glusterd restart, bind() fails for port 61000.

There is an upstream patch, http://review.gluster.org/#/c/14613/, posted for review, which makes gluster rely on the kernel's local port range and pick a port from that range instead of maintaining its own. Please note this patch is not committed for 3.2.0. For now, given there is no impact to functionality, this can be marked as a known issue and considered for a future release (probably 3.2.1).
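As a rough illustration (not the actual glusterd code), the conflict can be sketched in shell: compare a candidate port against the kernel's configured range. The bounds below are hard-coded to the values reported from /proc/sys/net/ipv4/ip_local_port_range on the affected node; on a live system you would read them from that file instead.

```shell
# Sketch only: the range check glusterd needs before binding a portmap port.
# On a live node, read the real bounds with:
#   read low high < /proc/sys/net/ipv4/ip_local_port_range
low=32768      # lower bound reported on the affected node
high=60999     # upper cap reported on the affected node

candidate=61000   # port that glusterd's own portmap logic tries to bind
if [ "$candidate" -gt "$high" ]; then
    echo "port $candidate is above the kernel's upper cap $high; bind() is denied"
else
    echo "port $candidate is within the kernel's local port range"
fi
```

With the reported values, 61000 falls outside the range, which matches the name_bind denial for src=61000 in the audit log.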
As http://review.gluster.org/#/c/14613/ is going to fix this issue, moving the state to POST.
Is there something I can do to convince glusterd to fall back to a port below 61000, now that it has fixated on 61000? I "was" using version 3.8.4 on Fedora 24. Only one node is affected for now. Thank you.
I hit this bug on my systemic setup in 3.2 with 3.8.4-5 build https://docs.google.com/spreadsheets/d/1iP5Mi1TewBFVh8HTmlcBm9072Bgsbgkr3CLcGmawDys/edit#gid=632186609
Also seen after upgrading to glusterfs-3.7.17-1.el7 on CentOS 7.2.1511, although it appears to not have been a fatal error; as near as I can tell, glusterd must have retried with a different port number that fell within the local port range.
*** Bug 1452699 has been marked as a duplicate of this bug. ***
Build: 3.12.2-8

Set max-port to 60999 in the glusterd.vol file and restarted glusterd. After that, no AVC denials related to glusterd have been seen in audit.log. Hence marking it as verified.
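For reference, the verification step above corresponds to an entry along these lines in glusterd.vol (commonly /etc/glusterfs/glusterd.vol; the surrounding lines are a typical sketch of that file, not copied from the test node):

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option max-port 60999
end-volume
```

Setting max-port caps gluster's own portmap range at the kernel's default ephemeral upper bound, so glusterd never attempts to bind 61000.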
I have updated the doc text. Kindly review and confirm.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2607